Results 1 - 12 of 12
1.
Elife ; 13, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38489224

ABSTRACT

How neural representations preserve information about multiple stimuli is mysterious. Because tuning of individual neurons is coarse (e.g., visual receptive field diameters can exceed perceptual resolution), the populations of neurons potentially responsive to each individual stimulus can overlap, raising the question of how information about each item might be segregated and preserved in the population. We recently reported evidence for a potential solution to this problem: when two stimuli were present, some neurons in the macaque visual cortical areas V1 and V4 exhibited fluctuating firing patterns, as if they responded to only one individual stimulus at a time (Jun et al., 2022). However, whether such an information encoding strategy is ubiquitous in the visual pathway and thus could constitute a general phenomenon remains unknown. Here, we provide new evidence that such fluctuating activity is also evoked by multiple stimuli in visual areas responsible for processing visual motion (middle temporal visual area, MT), and faces (middle fundus and anterolateral face patches in inferotemporal cortex - areas MF and AL), thus extending the scope of circumstances in which fluctuating activity is observed. Furthermore, consistent with our previous results in the early visual area V1, MT exhibits fluctuations between the representations of two stimuli when these form distinguishable objects but not when they fuse into one perceived object, suggesting that fluctuating activity patterns may underlie visual object formation. Taken together, these findings point toward an updated model of how the brain preserves sensory information about multiple stimuli for subsequent processing and behavioral action.


Subjects
Visual Cortex, Visual Pathways, Visual Pathways/physiology, Visual Cortex/physiology, Visual Fields, Neurons/physiology, Photic Stimulation
2.
J Neurodev Disord ; 15(1): 40, 2023 Nov 15.
Article in English | MEDLINE | ID: mdl-37964200

ABSTRACT

BACKGROUND: Neural motor control rests on the dynamic interaction of cortical and subcortical regions, which is reflected in the modulation of oscillatory activity and connectivity in multiple frequency bands. Motor control is thought to be compromised in developmental stuttering, particularly involving circuits in the left hemisphere that support speech, movement initiation, and timing control. However, to date, evidence comes from adult studies, with a limited understanding of motor processes in childhood, closer to the onset of stuttering. METHODS: We investigated the neural control of movement initiation in children who stutter and children who do not stutter by evaluating transient changes in EEG oscillatory activity (power, phase locking to button press) and connectivity (phase synchronization) during a simple button press motor task. We compared temporal changes in these oscillatory dynamics between the left and right hemispheres and between children who stutter and children who do not stutter, using mixed-model analysis of variance. RESULTS: We found reduced modulation of left hemisphere oscillatory power, phase locking to button press and phase connectivity in children who stutter compared to children who do not stutter, consistent with previous findings of dysfunction within the left sensorimotor circuits. Interhemispheric connectivity was weaker at lower frequencies (delta, theta) and stronger in the beta band in children who stutter than in children who do not stutter. CONCLUSIONS: Taken together, these findings indicate weaker engagement of the contralateral left motor network in children who stutter even during low-demand non-speech tasks, and suggest that the right hemisphere might be recruited to support sensorimotor processing in childhood stuttering. Differences in oscillatory dynamics occurred despite comparable task performance between groups, indicating that an altered balance of cortical activity might be a core aspect of stuttering, observable during normal motor behavior.
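The phase-locking measure described in this abstract can be illustrated with a minimal sketch (a generic illustration, not the authors' pipeline; the 10 Hz frequency, sampling rate, and trial counts are invented assumptions). Inter-trial phase locking at a frequency of interest is the magnitude of the mean unit phase vector across trials: identical event-locked phases give a value near 1, random phases a value near 0.

```python
import numpy as np

def plv_at_freq(trials, freq, fs):
    """Inter-trial phase-locking value at one frequency.

    trials: array (n_trials, n_samples); freq in Hz; fs in samples/s.
    Each trial's phase is taken from its projection onto a complex
    sinusoid (complex demodulation); the PLV is the magnitude of the
    mean unit phase vector across trials, in [0, 1].
    """
    t = np.arange(trials.shape[1]) / fs
    coefs = trials @ np.exp(-2j * np.pi * freq * t)  # one complex coefficient per trial
    return np.abs(np.mean(np.exp(1j * np.angle(coefs))))

rng = np.random.default_rng(0)
fs = 500
t = np.arange(0, 1, 1 / fs)  # 1-s trials at 500 samples/s
# Trials phase-locked to a (hypothetical) button press vs. phase-jittered trials.
locked = np.array([np.sin(2 * np.pi * 10 * t) for _ in range(40)])
jittered = np.array([np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
                     for _ in range(40)])
print(plv_at_freq(locked, 10.0, fs))    # ~1: phases align across trials
print(plv_at_freq(jittered, 10.0, fs))  # well below 1: phases are random
```

Group and hemisphere contrasts like those in the study would then compare such PLV estimates across conditions; real pipelines typically band-pass filter and take phases from a Hilbert or wavelet transform rather than a single demodulation frequency.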


Subjects
Stuttering, Adult, Humans, Child, Speech
3.
Hum Brain Mapp ; 44(13): 4812-4829, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37483170

ABSTRACT

Over the course of literacy development, children learn to recognize word sounds and meanings in print. Yet, they do so differently across alphabetic and character-based orthographies such as English and Chinese. To uncover cross-linguistic influences on children's literacy, we asked young Chinese-English simultaneous bilinguals and English monolinguals (N = 119, ages 5-10) to complete phonological awareness (PA) and morphological awareness (MA) literacy tasks. Children completed the tasks in the auditory modality in each of their languages during functional near-infrared spectroscopy neuroimaging. Cross-linguistically, comparisons between bilinguals' two languages revealed that the task that was more central to reading in a given orthography, such as PA in English and MA in Chinese, elicited less activation in the left inferior frontal and parietal regions. Group comparisons between bilinguals and monolinguals in English, their shared language of academic instruction, revealed that the left inferior frontal region was less active during phonology but more active during morphology in bilinguals relative to monolinguals. MA skills are generally considered to have greater language specificity than PA skills. Bilingual literacy training in a skill that is maximally similar across languages, such as PA, may therefore yield greater automaticity for this skill, as reflected in the lower activation in bilinguals relative to monolinguals. This interpretation is supported by negative correlations between proficiency and brain activation. Together, these findings suggest that both the structural characteristics of and literacy experiences with a given language can exert specific influences on bilingual and monolingual children's emerging brain networks for learning to read.


Subjects
Literacy, Multilingualism, Child, Humans, Linguistics, Neuroimaging
4.
bioRxiv ; 2023 Jul 19.
Article in English | MEDLINE | ID: mdl-37502939

ABSTRACT

How neural representations preserve information about multiple stimuli is mysterious. Because tuning of individual neurons is coarse (for example, visual receptive field diameters can exceed perceptual resolution), the populations of neurons potentially responsive to each individual stimulus can overlap, raising the question of how information about each item might be segregated and preserved in the population. We recently reported evidence for a potential solution to this problem: when two stimuli were present, some neurons in the macaque visual cortical areas V1 and V4 exhibited fluctuating firing patterns, as if they responded to only one individual stimulus at a time. However, whether such an information encoding strategy is ubiquitous in the visual pathway and thus could constitute a general phenomenon remains unknown. Here we provide new evidence that such fluctuating activity is also evoked by multiple stimuli in visual areas responsible for processing visual motion (middle temporal visual area, MT), and faces (middle fundus and anterolateral face patches in inferotemporal cortex - areas MF and AL), thus extending the scope of circumstances in which fluctuating activity is observed. Furthermore, consistent with our previous results in the early visual area V1, MT exhibits fluctuations between the representations of two stimuli when these form distinguishable objects but not when they fuse into one perceived object, suggesting that fluctuating activity patterns may underlie visual object formation. Taken together, these findings point toward an updated model of how the brain preserves sensory information about multiple stimuli for subsequent processing and behavioral action. Impact Statement: We find neural fluctuations in multiple areas along the visual cortical hierarchy that could allow the brain to represent distinct co-occurring visual stimuli.

5.
Ann Appl Stat ; 15(1): 41-63, 2021 Mar.
Article in English | MEDLINE | ID: mdl-34413921

ABSTRACT

Conventional analysis of neuroscience data involves computing average neural activity over a group of trials and/or a period of time. This approach may be particularly problematic when assessing the response patterns of neurons to more than one simultaneously presented stimulus. In such cases the brain must represent each individual component of the stimuli bundle, but trial-and-time-pooled averaging methods are fundamentally unequipped to address the means by which multi-item representation occurs. We introduce and investigate a novel statistical analysis framework that relates the firing pattern of a single cell, exposed to a stimuli bundle, to the ensemble of its firing patterns under each constituent stimulus. Existing statistical tools focus on what may be called "first order stochasticity": trial-to-trial variation in the form of unstructured noise around a fixed firing rate curve associated with a given stimulus. Our analysis is based upon the theoretical premise that exposure to a stimuli bundle induces additional stochasticity in the cell's response pattern in the form of a stochastically varying recombination of its single stimulus firing rate curves. We discuss challenges to statistical estimation of such "second order stochasticity" and address them with a novel dynamic admixture point process (DAPP) model. DAPP is a hierarchical point process model that decomposes second order stochasticity into a Gaussian stochastic process and a random vector of interpretable features and facilitates borrowing of information on the latter across repeated trials through latent clustering. We illustrate the utility and accuracy of the DAPP analysis with synthetic data simulation studies. We present real-world evidence of second order stochastic variation with an analysis of monkey inferior colliculus recordings under auditory stimuli.
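The contrast between first and second order stochasticity can be made concrete with a toy simulation (a sketch of the premise only, not the DAPP model; the rates, trial count, and Beta mixing distribution are assumptions). A fixed intermediate rate and a trial-varying recombination of two single-stimulus rates produce the same pooled average, so averaging cannot tell them apart, but the recombination leaves a signature of extra trial-to-trial dispersion.

```python
import numpy as np

rng = np.random.default_rng(1)
rate_a, rate_b = 50.0, 20.0   # hypothetical single-stimulus firing rates (spikes/s)
dur, n_trials = 1.0, 2000     # 1-s trials

# First order stochasticity: unstructured Poisson noise around one fixed,
# intermediate firing rate.
fixed = rng.poisson((rate_a + rate_b) / 2 * dur, size=n_trials)

# Second order stochasticity: each trial draws its own mixing weight, so the
# effective rate itself wanders between the two single-stimulus rates.
alpha = rng.beta(0.3, 0.3, size=n_trials)  # weights piled near 0 and 1
mixture = rng.poisson((alpha * rate_a + (1 - alpha) * rate_b) * dur)

# Pooled means agree (E[alpha] = 0.5) ...
print(fixed.mean(), mixture.mean())   # both near 35
# ... but the mixture is strongly overdispersed relative to Poisson.
print(fixed.var(), mixture.var())     # mixture variance is several times larger
```

The DAPP model goes further, letting the mixing weight vary smoothly within trials and sharing information across trials through latent clustering; this sketch captures only the across-trial version of the premise.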

6.
J Neurophysiol ; 126(1): 82-94, 2021 Jul 01.
Article in English | MEDLINE | ID: mdl-33852803

ABSTRACT

Stimulus locations are detected differently by different sensory systems, but ultimately they yield similar percepts and behavioral responses. How the brain transcends initial differences to compute similar codes is unclear. We quantitatively compared the reference frames of two sensory modalities, vision and audition, across three interconnected brain areas involved in generating saccades, namely the frontal eye fields (FEF), lateral and medial parietal cortex (M/LIP), and superior colliculus (SC). We recorded from single neurons in head-restrained monkeys performing auditory- and visually guided saccades from variable initial fixation locations and evaluated whether their receptive fields were better described as eye-centered, head-centered, or hybrid (i.e., not anchored uniquely to head or eye orientation). We found a progression of reference frames across areas and across time, with considerable hybrid-ness and persistent differences between modalities during most epochs/brain regions. For both modalities, the SC was more eye-centered than the FEF, which in turn was more eye-centered than the predominantly hybrid M/LIP. In all three areas and temporal epochs from stimulus onset to movement, visual signals were more eye-centered than auditory signals. In the SC and FEF, auditory signals became more eye-centered at the time of the saccade than they were initially after stimulus onset, but only in the SC at the time of the saccade did the auditory signals become "predominantly" eye-centered. The results indicate that visual and auditory signals both undergo transformations, ultimately reaching the same final reference frame but via different dynamics across brain regions and time. NEW & NOTEWORTHY Models for visual-auditory integration posit that visual signals are eye-centered throughout the brain, whereas auditory signals are converted from head-centered to eye-centered coordinates. We show instead that both modalities largely employ hybrid reference frames: neither fully head- nor eye-centered. Across three hubs of the oculomotor network (intraparietal cortex, frontal eye field, and superior colliculus), visual and auditory signals evolve from hybrid to a common eye-centered format via different dynamics across brain areas and time.
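The eye-centered versus head-centered comparison can be caricatured in a few lines (a deliberate simplification with invented numbers; the actual study compares full response functions across fixations and admits a hybrid category). Express each measured response peak in both candidate frames and ask in which frame the peaks stay put as fixation varies.

```python
import numpy as np

def classify_frame(peak_head, eye_pos):
    """peak_head: response-peak locations in head-centered degrees, one per
    fixation; eye_pos: eye (fixation) positions in degrees.
    The candidate frame in which the peaks vary least across fixations wins."""
    peak_eye = peak_head - eye_pos  # the same peaks, expressed eye-centered
    return "eye-centered" if np.std(peak_eye) < np.std(peak_head) else "head-centered"

eye_pos = np.array([-12.0, 0.0, 12.0])  # three initial fixation positions

# A neuron anchored to the retina: its head-centered peak moves with the eyes.
print(classify_frame(eye_pos + 8.0, eye_pos))              # eye-centered

# A neuron anchored to the head: its peak ignores where the eyes point.
print(classify_frame(np.array([8.0, 8.0, 8.0]), eye_pos))  # head-centered
```

Receptive fields that shift only partially with the eyes, or change shape across fixations, fall between these two extremes; that middle ground is what the hybrid category captures.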


Subjects
Auditory Perception/physiology, Frontal Lobe/physiology, Parietal Lobe/physiology, Saccades/physiology, Superior Colliculi/physiology, Visual Perception/physiology, Acoustic Stimulation/methods, Animals, Macaca mulatta, Photic Stimulation/methods, Time Factors
7.
Article in English | MEDLINE | ID: mdl-34505116

ABSTRACT

We recently reported the existence of fluctuations in neural signals that may permit neurons to code multiple simultaneous stimuli sequentially across time [1]. This required deploying a novel statistical approach to permit investigation of neural activity at the scale of individual trials. Here we present tests using synthetic data to assess the sensitivity and specificity of this analysis. We fabricated datasets to match each of several potential response patterns derived from single-stimulus response distributions. In particular, we simulated dual stimulus trial spike counts that reflected fluctuating mixtures of the single stimulus spike counts, stable intermediate averages, single stimulus winner-take-all, or response distributions that were outside the range defined by the single stimulus responses (such as summation or suppression). We then assessed how well the analysis recovered the correct response pattern as a function of the number of simulated trials and the difference between the simulated responses to each "stimulus" alone. We found excellent recovery of the mixture, intermediate, and outside categories (>97% correct), and good recovery of the single/winner-take-all category (>90% correct) when the number of trials was >20 and the single-stimulus response rates were 50 Hz and 20 Hz, respectively. Both larger numbers of trials and greater separation between the single stimulus firing rates improved categorization accuracy. These results provide a benchmark, and guidelines for data collection, for use of this method to investigate coding of multiple items at the individual-trial time scale.
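The four fabricated response patterns can be sketched as follows (an illustrative reconstruction, not the authors' code; Poisson spike counts, the 50/20 spikes/s rates, and the trial count follow the abstract, everything else is assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
rate_a, rate_b, n_trials = 50, 20, 25   # single-stimulus rates (Hz), trial count

def simulate(pattern):
    """Fabricate dual-stimulus spike counts for 1-s trials under one
    hypothesized response pattern."""
    if pattern == "mixture":       # whole-trial fluctuation: A-like or B-like trials
        pick_a = rng.random(n_trials) < 0.5
        return np.where(pick_a,
                        rng.poisson(rate_a, n_trials),
                        rng.poisson(rate_b, n_trials))
    if pattern == "intermediate":  # stable average of the two rates
        return rng.poisson((rate_a + rate_b) / 2, n_trials)
    if pattern == "single":        # winner-take-all: stimulus A alone drives the cell
        return rng.poisson(rate_a, n_trials)
    if pattern == "outside":       # beyond the single-stimulus range, e.g. summation
        return rng.poisson(rate_a + rate_b, n_trials)
    raise ValueError(pattern)

for pattern in ("mixture", "intermediate", "single", "outside"):
    counts = simulate(pattern)
    # "mixture" and "intermediate" share a pooled mean near 35 and differ only
    # in trial-to-trial spread, which is why single-trial analysis is needed.
    print(pattern, counts.mean(), counts.std())
```

Feeding such fabricated datasets to the whole-trial analysis and scoring how often each generating pattern is recovered reproduces, in miniature, the benchmarking logic the abstract describes.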

8.
Nat Commun ; 9(1): 2715, 2018 Jul 13.
Article in English | MEDLINE | ID: mdl-30006598

ABSTRACT

How the brain preserves information about multiple simultaneous items is poorly understood. We report that single neurons can represent multiple stimuli by interleaving signals across time. We record single units in an auditory region, the inferior colliculus, while monkeys localize 1 or 2 simultaneous sounds. During dual-sound trials, we find that some neurons fluctuate between firing rates observed for each single sound, either on a whole-trial or on a sub-trial timescale. These fluctuations are correlated in pairs of neurons, can be predicted by the state of local field potentials prior to sound onset, and, in one monkey, can predict which sound will be reported first. We find corroborating evidence of fluctuating activity patterns in a separate dataset involving responses of inferotemporal cortex neurons to multiple visual stimuli. Alternation between activity patterns corresponding to each of multiple items may therefore be a general strategy to enhance the brain processing capacity, potentially linking such disparate phenomena as variable neural firing, neural oscillations, and limits in attentional/memory capacity.


Subjects
Action Potentials/physiology, Auditory Cortex/physiology, Auditory Perception/physiology, Inferior Colliculi/physiology, Neurons/physiology, Acoustic Stimulation, Animals, Attention/physiology, Auditory Cortex/cytology, Implanted Electrodes, Female, Inferior Colliculi/cytology, Macaca mulatta, Neurons/cytology, Single-Cell Analysis, Sound, Stereotaxic Techniques
9.
J Neurophysiol ; 119(4): 1411-1421, 2018 Apr 01.
Article in English | MEDLINE | ID: mdl-29357464

ABSTRACT

We accurately perceive the visual scene despite moving our eyes ~3 times per second, an ability that requires incorporation of eye position and retinal information. In this study, we assessed how this neural computation unfolds across three interconnected structures: frontal eye fields (FEF), intraparietal cortex (LIP/MIP), and the superior colliculus (SC). Single-unit activity was assessed in head-restrained monkeys performing visually guided saccades from different initial fixations. As previously shown, the receptive fields of most LIP/MIP neurons shifted to novel positions on the retina for each eye position, and these locations were not clearly related to each other in either eye- or head-centered coordinates (defined as hybrid coordinates). In contrast, the receptive fields of most SC neurons were stable in eye-centered coordinates. In FEF, visual signals were intermediate between those patterns: around 60% were eye-centered, whereas the remainder showed changes in receptive field location, boundaries, or responsiveness that rendered the response patterns hybrid or occasionally head-centered. These results suggest that FEF may act as a transitional step in an evolution of coordinates between LIP/MIP and SC. The persistence across cortical areas of mixed representations that do not provide unequivocal location labels in a consistent reference frame has implications for how these representations must be read out. NEW & NOTEWORTHY How we perceive the world as stable using mobile retinas is poorly understood. We compared the stability of visual receptive fields across different fixation positions in three visuomotor regions. Irregular changes in receptive field position were ubiquitous in intraparietal cortex, evident but less common in the frontal eye fields, and negligible in the superior colliculus (SC), where receptive fields shifted reliably across fixations. Only the SC provides a stable labeled-line code for stimuli across saccades.


Subjects
Electroencephalography/methods, Electrophysiological Phenomena, Frontal Lobe/physiology, Parietal Lobe/physiology, Saccades/physiology, Superior Colliculi/physiology, Visual Perception/physiology, Animals, Macaca mulatta
10.
J Neurophysiol ; 115(6): 3162-73, 2016 Jun 01.
Article in English | MEDLINE | ID: mdl-26936983

ABSTRACT

Saccadic eye movements can be elicited by more than one type of sensory stimulus. This implies substantial transformations of signals originating in different sense organs as they reach a common motor output pathway. In this study, we compared the prevalence and magnitude of auditory- and visually evoked activity in a structure implicated in oculomotor processing, the primate frontal eye fields (FEF). We recorded from 324 single neurons while 2 monkeys performed delayed saccades to visual or auditory targets. We found that 64% of FEF neurons were active on presentation of auditory targets and 87% were active during auditory-guided saccades, compared with 75% and 84% for visual targets and saccades. As saccade onset approached, the average level of population activity in the FEF became indistinguishable on visual and auditory trials. FEF activity was better correlated with the movement vector than with the target location for both modalities. In summary, the large proportion of auditory-responsive neurons in the FEF, the similarity between visual and auditory activity levels at the time of the saccade, and the strong correlation between the activity and the saccade vector suggest that auditory signals are tailored to roughly match the strength of visual signals present in the FEF, facilitating access to a common motor output pathway.


Subjects
Action Potentials/physiology, Frontal Lobe/cytology, Neurons/physiology, Saccades, Visual Fields/physiology, Acoustic Stimulation, Analysis of Variance, Animals, Female, Frontal Lobe/diagnostic imaging, Frontal Lobe/physiology, Macaca mulatta, Magnetic Resonance Imaging, Male, Photic Stimulation, Psychophysics, Reaction Time
11.
J Cogn Neurosci ; 27(8): 1659-73, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25803599

ABSTRACT

Categorical perception occurs when a perceiver's stimulus classifications affect their ability to make fine perceptual discriminations and is the most intensively studied form of category learning. On the basis of categorical perception studies, it has been proposed that category learning proceeds by the deformation of an initially homogeneous perceptual space ("perceptual warping"), so that stimuli within the same category are perceived as more similar to each other (more difficult to tell apart) than stimuli that are the same physical distance apart but that belong to different categories. Here, we present a significant counterexample in which robust category learning occurs without these differential perceptual space deformations. Two artificial categories were defined along the dimension of pitch for a perceptually unfamiliar, multidimensional class of sounds. A group of participants (selected on the basis of their listening abilities) were trained to sort sounds into these two arbitrary categories. Category formation, verified empirically, was accompanied by a heightened sensitivity along the entire pitch range, as indicated by changes in an EEG index of implicit perceptual distance (mismatch negativity), with no significant resemblance to the local perceptual deformations predicted by categorical perception. This demonstrates that robust categories can be initially formed within a continuous perceptual dimension without perceptual warping. We suggest that perceptual category formation is a flexible, multistage process sequentially combining different types of learning mechanisms rather than a single process with a universal set of behavioral and neural correlates.


Subjects
Auditory Perception/physiology, Brain/physiology, Judgment/physiology, Acoustic Stimulation, Adolescent, Adult, Psychological Discrimination/physiology, Electroencephalography, Female, Humans, Learning/physiology, Male, Neuropsychological Tests, Psychological Recognition/physiology, Young Adult
12.
PLoS One ; 9(1): e87065, 2014.
Article in English | MEDLINE | ID: mdl-24466328

ABSTRACT

Pitch and timbre perception are both based on the frequency content of sound, but previous perceptual experiments have disagreed about whether these two dimensions are processed independently of each other. We tested the interaction of pitch and timbre variations using sequential comparisons of sound pairs. Listeners judged whether two sequential sounds were identical along the dimension of either pitch or timbre, while the perceptual distances along both dimensions were parametrically manipulated. Pitch and timbre variations perceptually interfered with each other, and the degree of interference was modulated by the magnitude of changes along the unattended dimension. These results show that pitch and timbre are not orthogonal to each other when both are assessed with parametrically controlled variations.


Subjects
Auditory Perception/physiology, Pitch Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Female, Humans, Male, Pitch Discrimination/physiology, Reaction Time/physiology, Sound, Young Adult